Quantum generative adversarial learning in a superconducting quantum circuit
Authors
Abstract
Related articles
Decoherence in a superconducting quantum bit circuit
G. Ithier,1 E. Collin,1 P. Joyez,1 P. J. Meeson,1,2 D. Vion,1 D. Esteve,1 F. Chiarello,3 A. Shnirman,4 Y. Makhlin,4,5 J. Schriefl,4,6 and G. Schön4 1Quantronics Group, Service de Physique de l’Etat Condensé, DSM/DRECAM, CEA Saclay, 91191 Gif-sur-Yvette, France 2Department of Physics, Royal Holloway, University of London, Egham Hill, Egham, Surrey TW20 0EX, United Kingdom 3Istituto di Fotonica e...
A twofold quantum delayed-choice experiment in a superconducting circuit
Wave-particle complementarity lies at the heart of quantum mechanics. To illustrate this mysterious feature, Wheeler proposed the delayed-choice experiment, where a quantum system manifests the wave- or particle-like attribute, depending on the experimental arrangement, which is made after the system has entered the interferometer. In recent quantum delayed-choice experiments, these two complem...
Quantum phases in circuit QED with a superconducting qubit array
Circuit QED on a chip has become a powerful platform for simulating complex many-body physics. In this report, we realize a Dicke-Ising model with an antiferromagnetic nearest-neighbor spin-spin interaction in circuit QED with a superconducting qubit array. We show that this system exhibits a competition between the collective spin-photon interaction and the antiferromagnetic nearest-neighbor s...
Generative Adversarial Active Learning
We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from ...
Generative Adversarial Imitation Learning
Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert’s cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a...
Journal
Journal: Science Advances
Year: 2019
ISSN: 2375-2548
DOI: 10.1126/sciadv.aav2761